Does utilitarianism make unrealistic demands on our limited capacity for knowledge?

Mill essay III

Tuesday, 16 May, 2000

Dr Tasioulas

 

In its most basic form, utilitarianism directs every moral agent to act so that the aggregate welfare of all moral agents is maximised. On a first reading, we might be lulled into hoping that utilitarianism offers us a means to elevate our world to the best of all possible worlds: who would argue when faced with such a utopian vision? I think utilitarianism can be criticised along two main lines: our limitations as moral agents are too crippling for us to be able to abide by a utilitarian morality; and even if by superhuman effort and miraculous understanding we were to overcome these limitations, our lives in a utilitarian utopia would be lacking compared to a prudential life. Seen the other way around: despite being a supposedly maximum-utility world, there are other worlds I would actually rather live in; and a maximum, or even high, utility world would not result from our overly fallible attempts to follow the utilitarian standard.

 

I will focus largely on the issue of utilitarianism's unrealistic demands, since the question of utilitarianism's desirability opens every other question in ethics. Mill was aware of being criticised for requiring too much of us in his morality. His answer was to dismiss this as beside the point, since an ethical theory shows us what we ought to strive for, irrespective of whether we want, or are able, to meet this aim. To an extent, he is justified in this: no morality will be effortless, and we are in a better position having a demanding criterion than none at all. Moreover, because utilitarian rightness and wrongness are (non-mutually exclusive) continua, not absolutes, utilitarianism is tolerant of error: we might not always choose the best action, but if utilitarianism helps us act in a better way than we otherwise would, it is valuable to us nevertheless. However, if a moral theory is too demanding then there is a danger of our grossly misapplying it, or that few will recognise its validity. A moral theory needs to coincide roughly with, as well as refine, common-sense ethics, which is predicated firmly on our nature and limitations.

 

Griffin is right to stress our status as agents, and to attack any objective ethical theory which is blind to our capacity to employ it. He shows there to be four areas which delimit our moral agency:

The good life

There appears to be a conflict between classical utilitarianism's emphasis on maximising a relatively short-term mental state, and the long-term, life-structuring, partial, non-utilitarian prudential values, such as enjoyment, understanding, accomplishment, deep personal relations, autonomy and liberty.

The limits of the will

We are constrained (by evolution) in what we can will ourselves into doing, and autonomy is an essential component of morality. Mill points to the power of education and indoctrination, and history has shown the power of inspiring goals such as religion in enabling us to transcend our capacities.

The demands of the social life

We are wedded to our obligation-generating social, political and economic institutions.

The limits of knowledge

Our knowledge of both past and future is fuzzy, subjective and narrow, constrained by memory, understanding, situation, language, personality and point of view. Although we supposedly have the whole of human history as our guide to help in making complex utilitarian calculations, our knowledge of that history is restricted to what is available to us in writing, conversation or direct experience, and to how much of this we actually remember. When using this knowledge for predictive and evaluative purposes, our appraisal of the current situation is necessarily subjective and patchy, and our guesses about consequences can be misled by insufficient, ignored or over-emphasised facts about the world, physics, others' motivations or chance. At every stage, our decisions are coloured by our interpretation of events and issues, by past experience and unconscious biases, by the lessons we have learned, and by the maximum volume of information we can absorb, process and imagine.

 

Given this outline of our capacities, we need to look at utilitarianism and the demands it makes on us.

 

One of the difficulties of discussing utilitarianism is that it comes in many different flavours, each with its own particular demands and appeal. We are only interested in evaluating the strongest form that utilitarianism can take: by this, I mean the form which, if followed to the best of our ability, would give rise to the world we would most want to live in. By starting with the most demanding, and titrating down to a theory which seems both workable and valuable, I hope to reach a compromise between our limitations and utilitarianism's elegance and appeal.

Single-level act utilitarianism is probably the most basic formulation of the utilitarian formula: an action is right insofar as it increases the aggregate utility. Strictly, then, we should consciously consider to what extent our every action will affect the aggregate utility, requiring that one be entirely impartial and rational, and act directly in line with this calculation.

Consequentialism defines the right action as being the one that gives rise to the best state of affairs. Extending single-level act utilitarianism, we have welfarist consequentialism, perhaps the most demanding and rational form of utilitarianism. Here, the right action is the one that gives rise to the greatest total utility over time.

Griffin gives various scenarios where one has the choice of sacrificing one life to save many. In the most clear-cut, such as the fat tourist trapping his five friends while the water level rises, a consequentialist would see no intrinsic problem in dynamiting him out of the hole to let the rest of them climb out. However, the example of the surgeon considering transplanting the unwilling recluse's body parts to save five other patients demonstrates the power and fundamental impracticability of consequentialism. It might be that no one would find out, in which case the surgeon would have done the right thing. Or it might be that rumours of surgeons illicitly 'playing God' might surface if such actions became commonplace, damaging the doctor-patient relationship and causing six fatal heart attacks in rich misers throughout the country. Or it might be that the recluse did have a loving family in Australia, or that his generous donations to charity dry up, or that one of those five patients is a serial killer.

This brings up the question of how far to look ahead in our evaluation of the endless ramifications of an action. Unless we intend to look to a Judgement Day, there is no single, ultimate, eventual state of affairs to evaluate against. Should we instead add up the aggregate over time, so that our calculations of total utility extend not only to all other moral agents, but also into the infinite future? But then, how can we be blamed (or lauded) if, like the oblivious butterfly, our actions indirectly cause a tidal wave on the other side of the globe, perhaps years in the future? Or need we only judge the morality of an action by the consequences we can foresee? This is in line with Sidgwick's distinction between objective and subjective rightness, where a decision based on the probability of good or bad likely to ensue might lead a surgeon to opt for a less risky but less beneficial operation. This would be the subjectively right thing to do, though he might in actuality be doing the objectively wrong thing if (unbeknownst to him) the riskier operation would have been successful.

This seems to be only the first step away from our objective utilitarianism, where actions are always right or wrong no matter who performs them, towards a morality based on individual differences, where one person can be said to have done the right thing but another should have known better. Either way, such a limited consequentialism implicitly acknowledges that the right action is no longer necessarily tied to a maximising of utility. However, it is the only form of consequentialism that we can hope to apply, and indeed it is similar to our common-sense practice of evaluating responsibility for consequences on the basis of intentions.

In the opposite direction lie the rule utilitarians, who emphasise Mill's discussion of the place of customary morality in utilitarianism. They seek to rest what seems like an objective list of values on a utilitarian framework, by recognising the ultimate standard of utility only to justify secondary principles, such as honesty, loyalty or health. This has the disadvantage of distancing us from the appealing simplicity of the principle of utility, as well as being open to all the criticisms of older objective-list ethical theories, such as 'Which values should feature on your list?'

I do not intend to dwell on such rule-worship forms of utilitarianism because I think Mill's own 'multi-level' formulation (with which it seems clear he deserves to be credited) wholly supersedes pure rule utilitarianism. He explains the need for such a non-act-utilitarian level of customary morality as a way to make act utilitarianism more practicable. Rather than ponderously calculating the act-utilitarian effects of our every action, we can work on the basis of secondary principles, reliably based on the experience of mankind over the ages, resorting to our ultimate principle of utility only in situations of conflict. In this neat way, he reduces the demands on our time for calculation, as well as the demands on our knowledge: we find it quite manageable to decide our actions on the basis of a small number of moral principles, such as those the rule utilitarians might favour. Then, when we turn to ethics to provide direction in exceptional circumstances, we perform the best act-utilitarian calculation we can, and act in a subjectively right manner.

In fact, Mill further acknowledges that there is no intrinsic status to these secondary principles by stating that philosophers still have the job of refining them to ensure that they do result in maximising utility in the widest possible range of circumstances, to make our job as moral agents easier.

 

Our discussion seems to have highlighted two areas where we find the demands that utilitarianism places upon us particularly heavy: our partiality and our ignorance. Certainly, Mill's multi-level utilitarianism alleviates the otherwise impossible burden on our knowledge, but it requires an almost schizophrenic adherence to a customary morality with no intrinsic weight: it leaves us in the position of living rule-based, partial lives on a day-to-day basis, but having to set aside all familial loyalties, preferences and friendships when faced with problematic conflicts in our secondary principles. If there is an axe-murderer chasing my best friend, do I lie about where my friend is hiding to protect him? Well, since there is a conflict here between loyalty and honesty, we resort to the utilitarian principle. Mill's system seems to break down here if, for example, the axe-murderer would derive enormous utility from using my best friend's limbs as a hat rack, since a utilitarian perspective might then justify my telling him the truth about my friend's hiding place. The answer we would like our system to give seems to highlight a problem with partiality deep in the utilitarian enterprise, resolvable only with some sort of deontological weighting (e.g. towards loyalty to friends) which we cannot justify.

 

Having said this, utilitarianism is valuable even if it only helps us make good, but imperfect, choices. Paradoxically, we find ourselves relying on the limitations of the human will regarding impartial benevolence to ensure that I make the 'right' (in an obviously non-utilitarian sense) choice to save my friend. This highlights the possibility, however, that in applying a principle which places demands far above our abilities and knowledge, we may make very wrong choices indeed. This human frailty is why Mill offers us customary morality. His multi-level view complicates utilitarianism, but it does appear to make fairly realistic demands and to give rise largely to agreeable action.